We consider the problem of autonomous channel access (AutoCA), in which a group of terminals tries to discover a communication strategy with an access point (AP) over a shared wireless channel in a distributed fashion. Owing to irregular topology and the limited communication ranges of the terminals, a practical challenge for AutoCA is the hidden-terminal problem, which is notorious in wireless networks for degrading throughput and latency. To meet this challenge, this paper proposes a new multi-agent deep reinforcement learning paradigm, dubbed MADRL-HT, tailored for AutoCA in the presence of hidden terminals. MADRL-HT exploits topological insights and transforms each terminal's observation space into a scalable form that is independent of the number of terminals. To compensate for partial observability, we propose a look-back mechanism whereby a terminal can infer the behavior of its hidden terminals from the carrier-sensed channel states as well as feedback from the AP. A window-based global reward function is proposed, instructing the terminals to balance transmission opportunities among themselves over the course of learning so as to maximize system throughput. Extensive numerical experiments verify the superior performance of our solution benchmarked against the legacy carrier-sense multiple access with collision avoidance (CSMA/CA) scheme.
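To make the window-based reward concrete, here is a minimal Python sketch under our own assumptions: the window length, the 0/1 success log, and the use of Jain's fairness index are illustrative choices, not the paper's exact definition.

```python
import numpy as np

def window_reward(success_log, window=50):
    """Hypothetical window-based global reward: throughput over the last
    `window` slots, scaled by a fairness term so that terminals balance
    transmission opportunities. `success_log` is a (T, N) 0/1 array of
    per-slot, per-terminal successful deliveries."""
    recent = success_log[-window:]                       # last W slots
    per_terminal = recent.sum(axis=0)                    # successes per terminal
    throughput = per_terminal.sum() / window             # fraction of useful slots
    fairness = per_terminal.sum() ** 2 / (               # Jain's fairness index
        len(per_terminal) * (per_terminal ** 2).sum() + 1e-8)
    return throughput * fairness
```

Multiplying throughput by a fairness index is one simple way to reward high aggregate delivery only when opportunities are spread across terminals, which matches the balancing behavior the abstract describes.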
As one of the most widely used metal tube bending methods, the rotary draw bending (RDB) process enables reliable and high-precision metal tube bending forming (MTBF). Forming accuracy is seriously affected by springback and other potential forming defects, whose mechanisms are difficult to analyze. Meanwhile, existing methods are mostly conducted in offline space, ignoring real-time information from the physical world, which is unreliable and inefficient. To address this issue, a digital-twin-enhanced (DT-enhanced) real-time prediction method for metal tube bending forming based on multi-source-input multi-task learning (MTL) is proposed. The new method achieves comprehensive real-time prediction of MTBF. By sharing the common features of multiple closely related domains and adopting a group regularization strategy on the feature-sharing and accepting layers, the accuracy and efficiency of the multi-source-input MTL are guaranteed. Enhanced by the digital twin, physical real-time deformation data are aligned in the image dimension via an improved Gramian angular field (GAF) transformation, reflecting the actual processing. Different from traditional offline prediction methods, the new method integrates virtual and physical data to achieve more efficient and accurate real-time prediction, and a DT mapping connection between the virtual and physical systems is established. To exclude the influence of equipment errors, the effectiveness of the proposed method is verified on FE simulation cases validated by physical experiments. Meanwhile, a generic pre-trained network is compared with the proposed method. The results show that the proposed DT-enhanced prediction method is more accurate and efficient.
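For reference, the plain Gramian angular field transform that the method builds on can be sketched in a few lines; the paper's improved variant will differ, so treat this only as the baseline mapping from a 1-D signal to a 2-D image.

```python
import numpy as np

def gramian_angular_field(series):
    """Baseline Gramian angular (summation) field: rescale a 1-D deformation
    signal to [-1, 1], encode it as polar angles, and form the pairwise
    cosine-sum matrix so image-style layers can consume it."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))                   # polar angle encoding
    return np.cos(phi[:, None] + phi[None, :])               # G[i, j] = cos(phi_i + phi_j)
```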
Photo-realistic facial video portrait reenactment benefits virtual production and numerous VR/AR experiences. The task remains challenging because the portrait must maintain high realism and consistency with the target environment. In this paper, we present a relightable neural video portrait, a simultaneous relighting and reenactment scheme that transfers the head pose and facial expressions of a source actor to a portrait video of a target actor with arbitrary new backgrounds and lighting conditions. Our approach combines 4D reflectance field learning, model-based facial performance capture, and target-aware neural rendering. Specifically, we adopt a rendering-to-video translation network to first synthesize high-quality OLAT imagesets and alpha mattes from hybrid facial performance capture results. We then design a semantic-aware facial normalization scheme to enable reliable explicit control, together with a multi-frame multi-task learning strategy that simultaneously encodes content, segmentation, and temporal information for high-quality reflectance field inference. After training, our approach further enables photo-realistic and controllable video portrait editing of the target performer. Reliable face pose and expression editing is obtained by applying the same hybrid facial capture and normalization scheme to the source video input, while our explicit alpha and OLAT outputs enable high-quality relighting and background editing. With the ability to achieve simultaneous relighting and reenactment, we are able to improve realism in a variety of virtual production and video rewrite applications.
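The reason explicit OLAT and alpha outputs enable relighting and background editing is that image-based relighting is linear in the lights: any target illumination is a weighted sum of one-light-at-a-time images. A hedged sketch of that compositing step (function and argument names are ours, not the paper's):

```python
import numpy as np

def relight_from_olat(olat_stack, env_weights, alpha, background):
    """Illustrative image-based relighting with OLAT data (not the paper's
    network): weight each OLAT image by the target environment's per-light RGB
    intensity, then composite over a new background with the alpha matte.
    olat_stack: (L, H, W, 3); env_weights: (L, 3); alpha: (H, W) in [0, 1];
    background: (H, W, 3)."""
    fg = np.einsum('lhwc,lc->hwc', olat_stack, env_weights)  # linear relighting
    a = alpha[..., None]
    return a * fg + (1 - a) * background                     # matte compositing
```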
Human modeling and relighting are two fundamental problems in computer vision and graphics, where high-quality datasets can largely facilitate related research. However, most existing human datasets only provide multi-view human images captured under the same illumination. Although valuable for modeling tasks, they cannot be readily used for relighting problems. To promote research in both fields, in this paper, we present UltraStage, a new 3D human dataset that contains more than 2K high-quality human assets captured under both multi-view and multi-illumination settings. Specifically, for each example, we provide 32 surrounding views illuminated with one white light and two gradient illuminations. In addition to regular multi-view images, gradient illuminations help recover detailed surface normal and spatially-varying material maps, enabling various relighting applications. Inspired by recent advances in neural representation, we further interpret each example into a neural human asset which allows novel view synthesis under arbitrary lighting conditions. We show our neural human assets can achieve extremely high capture performance and are capable of representing fine details such as facial wrinkles and cloth folds. We also validate UltraStage in single image relighting tasks, training neural networks with virtual relighted data from neural assets and demonstrating realistic rendering improvements over prior art. UltraStage will be publicly available to the community to stimulate significant future developments in various human modeling and rendering tasks.
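As background on why gradient illuminations expose surface normals, here is the classic spherical-gradient ratio trick (Ma et al. 2007) in sketch form; it assumes one gradient pattern per axis, so UltraStage's two-gradient setup will differ in detail.

```python
import numpy as np

def normals_from_gradient_illumination(grad_x, grad_y, grad_z, full_on):
    """Classic ratio-image normal recovery under spherical gradient lighting,
    shown only to illustrate the principle. Each input is an (H, W) luminance
    image; gradients ramp from 0 to 1 along one axis, full_on is uniform light.
    For a Lambertian point, grad_i / full_on ~ (n_i + 1) / 2."""
    eps = 1e-8
    n = np.stack([2 * grad_x / (full_on + eps) - 1,   # ratio image -> [-1, 1]
                  2 * grad_y / (full_on + eps) - 1,
                  2 * grad_z / (full_on + eps) - 1], axis=-1)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)
```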
It has been observed that audio-visual embeddings extracted from the two modalities yield more robust person verification. However, the aggregator that generates a single utterance-level representation from per-frame features appears to be under-explored. In this paper, we propose an audio-visual network that considers the aggregator from a fusion perspective. We introduce improved attentive statistics pooling into audio-visual person verification for the first time. We then find a strong correlation between the modalities during the pooling process, and therefore propose joint attentive pooling, which incorporates cycle consistency to learn implicit inter-frame weights. Finally, the modalities are fused with a gated attention mechanism. All proposed models are trained on the VoxCeleb2 dev dataset, and the best system obtains 0.18%, 0.27%, and 0.49% EER on the three official trial lists of VoxCeleb1, respectively, which are, to the best of our knowledge, the best published results on person verification. As an analysis, visualization maps are generated to explain how the system interacts between the modalities.
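For context, the standard attentive statistics pooling that the abstract builds on can be sketched as follows; the paper's improved and joint variants add cross-modal and cycle-consistency machinery on top, and only this baseline is assumed here.

```python
import torch
import torch.nn as nn

class AttentiveStatsPooling(nn.Module):
    """Baseline attentive statistics pooling: learn a per-frame attention
    weight, then aggregate frames into a weighted mean and standard deviation,
    giving a single utterance-level vector of size 2*dim."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.att = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, frames):                            # frames: (B, T, D)
        w = torch.softmax(self.att(frames), dim=1)        # per-frame weights
        mu = (w * frames).sum(dim=1)                      # weighted mean
        var = (w * frames.pow(2)).sum(dim=1) - mu.pow(2)  # weighted variance
        return torch.cat([mu, var.clamp_min(1e-8).sqrt()], dim=-1)  # (B, 2D)
```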
Pedestrian behavior prediction is critical for fully autonomous vehicles to drive safely and efficiently on busy urban streets. Future autonomous vehicles need to adapt to mixed traffic conditions with not only technical but also social capabilities. While a growing number of algorithms and datasets have been developed for pedestrian behavior prediction, these efforts lack benchmark labels and the capability to estimate pedestrians' temporal dynamic intent changes, provide explanations of interaction scenes, and support algorithms with social intelligence. This paper proposes and shares another benchmark dataset, called the IUPUI-CSRC Pedestrian Situated Intent (PSI) dataset, which, in addition to comprehensive computer-vision labels, carries two innovative label types. The first novel label is the dynamic intent change of pedestrians crossing in front of the ego vehicle, annotated by 24 drivers with diverse backgrounds. The second is text-based explanations of the driver's reasoning process when estimating pedestrian intent and predicting pedestrian behavior during interactions. These innovative labels enable several computer-vision tasks, including pedestrian intent/behavior prediction, vehicle-pedestrian interaction segmentation, and video-to-language mapping for explainable algorithms. The released dataset can fundamentally improve the development of pedestrian behavior prediction models and of socially intelligent autonomous vehicles that interact with pedestrians effectively. The dataset has been evaluated on different tasks and has been released for public access.
In this paper, we leverage recent advances in physics-informed neural networks (PINNs) and develop a generic PINN-based framework for assessing the reliability of multi-state systems (MSSs). The proposed methodology consists of two major steps. In the first step, we recast the reliability assessment of an MSS as a machine learning problem using the PINN framework. A feedforward neural network with two individual loss groups is constructed to encode the initial condition and the state transitions governed by ordinary differential equations (ODEs) in the MSS. Next, we tackle the high imbalance in the magnitudes of the back-propagated gradients in the PINN from a multi-task learning perspective. In particular, we treat each element in the loss function as an individual task and adopt a gradient surgery approach named projecting conflicting gradients (PCGrad), in which a task's gradient is projected onto the normal plane of any other task whose gradient conflicts with it. The gradient projection operation significantly mitigates the detrimental effects of gradient interference when training the PINN, thus accelerating its convergence to high-precision solutions for MSS reliability assessment. With the proposed PINN-based framework, we investigate its application to MSS reliability assessment in several different contexts, where the state transitions are time-independent or time-dependent and the system scale ranges from small to medium. The results demonstrate that the proposed PINN-based framework shows generic and remarkable performance in MSS reliability assessment, and that incorporating PCGrad into the PINN yields substantial improvements in solution quality and convergence speed.
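A minimal sketch of the PCGrad projection described above, following Yu et al. (2020); the deterministic task ordering and the final summation are simplifications of the original randomized procedure.

```python
import torch

def pcgrad(grads):
    """Minimal PCGrad: whenever two task gradients conflict (negative inner
    product), project one onto the normal plane of the other, then combine.
    `grads` is a list of flattened per-task gradient vectors."""
    projected = [g.clone() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:                                       # conflicting pair
                g_i -= dot / (g_j.norm() ** 2 + 1e-12) * g_j  # remove conflict
    return torch.stack(projected).sum(dim=0)                  # combined update
```

Each loss term in the PINN (initial condition, each ODE residual) would contribute one entry of `grads`, and the returned vector replaces the raw summed gradient in the optimizer step.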
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as input and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
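One plausible reading of the implicit alignment, sketched under our own assumptions rather than from the released code: give every token, whether from the image or the point cloud, a positional embedding computed from 3D coordinates, so a plain transformer can attend across modalities in one shared frame.

```python
import torch
import torch.nn as nn

class Coords3DPosEmbed(nn.Module):
    """Hypothetical illustration of encoding 3D points into multi-modal
    features: an MLP maps each token's 3D coordinates to an embedding that is
    added to the token, aligning both modalities without a view transform."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, tokens, coords_3d):    # tokens: (B, N, D), coords: (B, N, 3)
        return tokens + self.mlp(coords_3d)  # shared 3D frame aligns modalities
```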
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly-added relations, which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features (see the sketch after this paragraph) and then propose to link object queries for better calibration via cross-attention. After these steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
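A hedged sketch of the first insight, mask-pooled dynamic class centers used to re-weight query features; the function names and the sigmoid gating choice are ours, not taken from the released code.

```python
import torch
import torch.nn.functional as F

def mask_pooled_class_centers(support_feats, support_masks):
    """Pool support features inside the ground-truth masks to obtain one
    dynamic class center per support example.
    support_feats: (K, D, H, W); support_masks: (K, H, W) binary."""
    m = support_masks.unsqueeze(1).float()                 # (K, 1, H, W)
    centers = (support_feats * m).sum(dim=(2, 3)) / (m.sum(dim=(2, 3)) + 1e-6)
    return F.normalize(centers, dim=-1)                    # (K, D) class centers

def reweight_query(query_feats, centers):
    """Re-weight query features by their cosine similarity to class centers;
    the sigmoid gate keeps the scaling in (0, 1)."""
    q = F.normalize(query_feats, dim=1)                    # (B, D, H, W)
    sim = torch.einsum('bdhw,kd->bkhw', q, centers)        # cosine similarity
    gate = sim.max(dim=1, keepdim=True).values.sigmoid()   # best-matching class
    return query_feats * gate
```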